perceptual similarity metric
Generating Images with Perceptual Similarity Metrics based on Deep Networks
We propose a class of loss functions, which we call deep perceptual similarity metrics (DeePSiM), that allow generating sharp, high-resolution images from compressed abstract representations. Instead of computing distances in image space, we compute distances between image features extracted by deep neural networks. This metric reflects the perceptual similarity of images much better and thus leads to better results. We demonstrate two use cases of the proposed loss: (1) networks that invert the AlexNet convolutional network; (2) a modified variational autoencoder that generates realistic high-resolution random images.
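The core idea of a feature-space loss can be sketched as follows. This is a minimal illustration, not the paper's implementation: a fixed random linear map stands in for the pretrained comparator network (DeePSiM uses activations of a deep network such as AlexNet), and the function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a deep feature extractor: a fixed random projection.
W = rng.standard_normal((512, 32 * 32 * 3)) / np.sqrt(32 * 32 * 3)

def features(img):
    """Hypothetical feature map phi(x); DeePSiM would use activations
    of a pretrained convolutional network instead."""
    return W @ img.ravel()

def pixel_loss(x, y):
    # Conventional image-space squared L2 distance.
    return np.sum((x - y) ** 2)

def feature_loss(x, y):
    # DeePSiM-style distance computed between extracted features.
    return np.sum((features(x) - features(y)) ** 2)

x = rng.random((32, 32, 3))
y = x + 0.05 * rng.standard_normal((32, 32, 3))  # perceptually similar image
print(pixel_loss(x, y), feature_loss(x, y))
```

In training, the feature distance replaces (or augments) the pixel distance as the reconstruction term, so the generator is penalized for perceptual rather than pixel-wise deviations.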
Reviews: Generating Images with Perceptual Similarity Metrics based on Deep Networks
I think the most important contribution of the manuscript is a method that substantially improves image reconstruction from compressed deep network representations. In that regard I would have liked an analysis of the compression rate for the reconstructions from the different feature spaces, in particular because the difference in quality between layers conv5 and fc6 does not seem too large, whereas there is a roughly 10-fold reduction in dimensionality (13 x 13 x 256 ≈ 43k vs. 4k) in the feature representation. One factor that should definitely be discussed in the paper is that the adversarial prior appears to encourage reuse of image content from the training set in the reconstruction. This is not necessarily a problem in terms of image compression, but it is an important factor: a careful choice of training data might be important depending on what type of images one wants to compress.
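The reviewer's dimensionality figures check out; a quick calculation makes the roughly 10-fold reduction explicit:

```python
# Dimensionality of AlexNet's conv5 vs fc6 representations cited in the review.
conv5 = 13 * 13 * 256   # conv5 feature map: 43,264 values (~43k)
fc6 = 4096              # fc6 vector: 4,096 values (~4k)
print(conv5, fc6, conv5 / fc6)  # 43264 4096 10.5625
```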
LipSim: A Provably Robust Perceptual Similarity Metric
Ghazanfari, Sara, Araujo, Alexandre, Krishnamurthy, Prashanth, Khorrami, Farshad, Garg, Siddharth
Recent years have seen growing interest in developing and applying perceptual similarity metrics. Research has shown the superiority of perceptual metrics over pixel-wise metrics in aligning with human perception and serving as a proxy for the human visual system. On the other hand, as perceptual metrics rely on neural networks, there is a growing concern regarding their resilience, given the established vulnerability of neural networks to adversarial attacks. It is indeed logical to infer that perceptual metrics may inherit both the strengths and shortcomings of neural networks. In this work, we demonstrate the vulnerability of state-of-the-art perceptual similarity metrics based on an ensemble of ViT-based feature extractors to adversarial attacks. We then propose a framework to train a robust perceptual similarity metric called LipSim (Lipschitz Similarity Metric) with provable guarantees. By leveraging 1-Lipschitz neural networks as the backbone, LipSim provides guarded areas around each data point and certificates for all perturbations within an $\ell_2$ ball. Finally, a comprehensive set of experiments shows the performance of LipSim in terms of natural and certified scores and on the image retrieval application. The code is available at https://github.com/SaraGhazanfari/LipSim.
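The certification argument behind a 1-Lipschitz backbone can be sketched in a few lines. This is a simplified illustration of the general principle, not LipSim's actual procedure: if a score is 1-Lipschitz in the input under the $\ell_2$ norm, a perturbation of norm at most $\epsilon$ moves the score by at most $\epsilon$, so a decision margin $m$ certifies every perturbation with $\epsilon < m$. The names `score` and `threshold` are illustrative, not LipSim's API.

```python
def certified_radius(score, threshold, lipschitz_const=1.0):
    """Largest l2 perturbation that provably cannot flip a thresholded
    decision, for a score function with the given Lipschitz constant.
    Simplified sketch of Lipschitz-based certification, not LipSim's code."""
    return abs(score - threshold) / lipschitz_const

# Decision margin of 0.3 -> certified l2 radius of 0.3 for a 1-Lipschitz score.
print(certified_radius(0.8, 0.5))
```

A larger Lipschitz constant shrinks the certificate proportionally, which is why constraining the backbone to be 1-Lipschitz is central to obtaining non-trivial guarantees.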
Generating Images with Perceptual Similarity Metrics based on Deep Networks
Dosovitskiy, Alexey, Brox, Thomas
Papers published at the Neural Information Processing Systems Conference.